
About the Provider

Alibaba Cloud is the cloud computing arm of Alibaba Group and the creator of the Qwen model family. Through its open-source initiative, Alibaba has released state-of-the-art language and multimodal models under permissive licenses, enabling developers and enterprises to build powerful AI applications across diverse domains and languages.

Model Quickstart

This section helps you get started quickly with the Qwen/Qwen3-VL-Plus model on the Qubrid AI inference platform. To use this model, you need:
  • A valid Qubrid API key
  • Access to the Qubrid inference API
  • Basic knowledge of making API requests in your preferred language
Once authenticated with your API key, you can send inference requests to the Qwen/Qwen3-VL-Plus model and receive responses based on your input prompts. The example below shows how to call the model; adapt it to whichever environment best fits your workflow.
from openai import OpenAI

# Initialize the OpenAI client with Qubrid base URL
client = OpenAI(
    base_url="https://platform.qubrid.com/v1",
    api_key="QUBRID_API_KEY",  # replace with your Qubrid API key
)

# Create a streaming chat completion
stream = client.chat.completions.create(
    model="Qwen/Qwen3-VL-Plus",
    messages=[
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "What is in this image? Describe the main elements."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ],
    max_tokens=16384,
    temperature=0.1,
    top_p=1,
    stream=True
)

# With stream=True, print tokens as they arrive
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print("\n")

# With stream=False, the call returns a single completion object instead:
# response = client.chat.completions.create(..., stream=False)
# print(response.choices[0].message.content)
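The quickstart above passes a public image URL. Many OpenAI-compatible endpoints also accept images inline as base64 data URLs; assuming Qubrid's endpoint follows that convention, a local image can be encoded with the Python standard library (the helper name below is illustrative):

```python
import base64

def image_bytes_to_data_url(data: bytes, mime: str = "image/jpeg") -> str:
    """Encode raw image bytes as a base64 data URL."""
    encoded = base64.b64encode(data).decode("ascii")
    return f"data:{mime};base64,{encoded}"

# The data URL goes in the same "image_url" slot as a regular URL:
# with open("photo.jpg", "rb") as f:
#     url = image_bytes_to_data_url(f.read())
# {"type": "image_url", "image_url": {"url": url}}
```

This avoids hosting the image anywhere public, at the cost of a larger request body.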

Model Overview

Qwen3 VL Plus is a vision-language model that understands images and text together.
  • Built on a Transformer decoder-only architecture with a ViT visual encoder and up to 256K token context, it delivers strong multimodal understanding across image analysis, OCR, and visual question answering tasks.
  • Trained on a multilingual multimodal dataset combining text and images, it is designed for production-ready vision inference workflows.

Model at a Glance

  • Model ID: Qwen/Qwen3-VL-Plus
  • Provider: Alibaba Cloud (Qwen Team)
  • Architecture: Transformer decoder-only (Qwen3-VL with ViT visual encoder)
  • Model Size: N/A
  • Parameters: 5
  • Context Length: Up to 256K tokens
  • Release Date: 2025
  • License: Apache 2.0
  • Training Data: Multilingual multimodal dataset (text + images)

When to use?

You should consider using Qwen3 VL Plus if:
  • You need image analysis for objects, scenes, and attributes with natural language descriptions
  • Your application requires OCR-style extraction of text from documents, screenshots, and real-world photos
  • Your use case involves visual question answering over charts, UIs, and complex diagrams

Inference Parameters

  • Streaming (boolean, default true): Enables streaming responses for real-time output.
  • Temperature (number, default 0.1): Lower values produce more deterministic output.
  • Max Tokens (number, default 16384): Maximum number of tokens the model can generate.
  • Top P (number, default 1): Controls nucleus sampling for more predictable output.
  • Reasoning Effort (select, default medium): Adjusts the depth of reasoning and problem-solving effort. Higher settings yield more thorough responses at the cost of latency.
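As a minimal sketch of how these parameters fit together, the helper below assembles the request payload used in the quickstart. The field names follow the OpenAI-compatible chat completions API; how Reasoning Effort is passed can vary by platform, so it is omitted here rather than guessed.

```python
def build_vl_request(prompt: str, image_url: str, stream: bool = True) -> dict:
    """Assemble a chat-completions payload for Qwen/Qwen3-VL-Plus.

    Defaults mirror the parameter table above.
    """
    return {
        "model": "Qwen/Qwen3-VL-Plus",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
        "max_tokens": 16384,
        "temperature": 0.1,
        "top_p": 1,
        "stream": stream,
    }

# The dict can be unpacked straight into the client call:
# client.chat.completions.create(**build_vl_request("Describe this image.", url))
```

Keeping the payload in one place makes it easy to tune temperature or toggle streaming without touching the rest of the call site.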

Key Features

  • Strong Multimodal Understanding: Jointly processes image and text inputs for accurate visual analysis and description.
  • ViT Visual Encoder: Dedicated vision transformer encoder for high-fidelity image feature extraction.
  • Up to 256K Context: Supports long multimodal conversations and extended document analysis.
  • OCR and Document Understanding: Reliable text extraction from documents, screenshots, and real-world photos.
  • Apache 2.0 License: Fully open-source with unrestricted commercial use.

Summary

Qwen3 VL Plus is Alibaba’s production-ready vision-language model built for multimodal understanding at scale.
  • It uses a Transformer decoder-only architecture with a ViT visual encoder, trained on a multilingual multimodal dataset.
  • It supports image analysis, OCR extraction, visual QA, and chart and diagram understanding with up to 256K context.
  • The model delivers strong multimodal understanding with configurable reasoning effort and streaming inference.
  • Licensed under Apache 2.0 for full commercial use.